
Provide a mechanism to skip tests at runtime #38

Closed (wants to merge 4 commits)

Conversation

bwidawsk
This closes #19

It is useful to be able to not pass or fail a test. Typically, this is accomplished by using the ignored flag; that works fine when the determination can be made before the Trial is pushed to the runner. There are cases where that information may not be known until the test is actually run.

In order to preserve the API, an interface is created that allows creating a test that may be skipped based on runtime information.

There is certainly room for discussion on how this is implemented. Using Ok was selected here as in my case it makes sense for a skipped test to be considered passing. For others, an enumerated Failure type might make more sense.
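A minimal sketch of the "Ok means skippable" idea described above. The type and function names here are illustrative stand-ins, not the actual libtest-mimic API: the sketch assumes a skippable test returns `Ok(Some(reason))` to request a runtime skip, `Ok(None)` to pass, and `Err(_)` to fail.

```rust
// Hypothetical stand-ins for the runner's types; names are illustrative only.
#[derive(Debug, PartialEq)]
enum Conclusion {
    Passed,
    Skipped(String),
    Failed(String),
}

// Inspect the test's Ok value to decide between pass and runtime skip,
// mirroring the convention described in the PR text.
fn run_skippable(test: impl Fn() -> Result<Option<String>, String>) -> Conclusion {
    match test() {
        Ok(None) => Conclusion::Passed,
        Ok(Some(reason)) => Conclusion::Skipped(reason),
        Err(msg) => Conclusion::Failed(msg),
    }
}

fn main() {
    // A test that only discovers at runtime that its precondition is missing.
    let needs_gpu = || {
        let gpu_present = false; // e.g. probed from the environment at runtime
        if !gpu_present {
            return Ok(Some("no GPU available".to_string()));
        }
        Ok(None)
    };
    assert_eq!(
        run_skippable(needs_gpu),
        Conclusion::Skipped("no GPU available".to_string())
    );
}
```

Note that under this convention a skipped test is carried in the `Ok` arm, which matches the OP's view of a skip as a form of passing.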

The runner will inspect Ok values to determine if the test should be
ignored ("skipped" for our purposes means runtime ignore). This can be
seen in terse or JSON output directly. In order to keep the default pretty
printing identical to libtest, the actual conclusion line is unmodified.
Because tests may run out of order, it is sometimes useful to have log lines
checked without requiring a specific ordering; this change adds that ability.
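The order-insensitive log check mentioned above can be sketched as a multiset comparison of lines. The helper name below is hypothetical (the PR's actual helper is `assert_reordered_log`, whose signature is not shown here); the sketch just illustrates the general technique:

```rust
use std::collections::BTreeMap;

// Order-insensitive comparison: count every line in the actual output,
// subtract the expected lines, and require all counts to cancel out.
// This way tests finishing in any order still produce a match.
fn lines_match_unordered(actual: &str, expected: &[&str]) -> bool {
    let mut counts: BTreeMap<&str, i64> = BTreeMap::new();
    for line in actual.lines() {
        *counts.entry(line).or_insert(0) += 1;
    }
    for line in expected {
        *counts.entry(line).or_insert(0) -= 1;
    }
    counts.values().all(|&c| c == 0)
}

fn main() {
    // Two tests that happened to finish in the "wrong" order still match.
    let log = "test b ... ok\ntest a ... ok";
    assert!(lines_match_unordered(log, &["test a ... ok", "test b ... ok"]));
    // A missing expected line is still a mismatch.
    assert!(!lines_match_unordered(log, &["test a ... ok"]));
}
```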
@LukasKalbertodt (Owner) left a comment


Thank you for the PR! Looks good to me overall.

Good idea to just add a separate skippable_test function to make this a backwards-compatible change. In this 0.x state, I don't mind making breaking-change releases, but if it can be avoided, that's good too.

Regarding the topic of Result/Option/custom enum: I am not a huge fan of the Result&lt;Option&lt;_&gt;, _&gt; solution. At least the inner option should probably be a custom enum { Skip { message: String }, Pass } or something like that. It would be even better to just have one enum Outcome2 { Passed, Skipped { message: String }, Failed(_) }. In that case I would rename the existing private Outcome to something else, like TrialOutcome. The problem with the Outcome approach is that errors inside test runners can't be propagated up with ? anymore :/ Try and FromResidual are still unstable. I am not sure how important the ability to use ? is, though. Any opinions regarding this whole issue?
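The enum shape suggested in this comment, and one workaround for the `?` limitation it raises, could look roughly like the following. This is a sketch under stated assumptions: `Outcome2` and `my_test` are hypothetical names, and without stable `Try`/`FromResidual` a function returning `Outcome2` directly cannot use `?`, so the fallible body is kept in an inner `Result`-returning helper and converted at the boundary.

```rust
// Hypothetical shape of the enum proposed in the review; names are illustrative.
#[derive(Debug, PartialEq)]
enum Outcome2 {
    Passed,
    Skipped { message: String },
    Failed(String),
}

// Boundary conversion: any error from the fallible body becomes a failure.
impl From<Result<Outcome2, String>> for Outcome2 {
    fn from(r: Result<Outcome2, String>) -> Self {
        match r {
            Ok(outcome) => outcome,
            Err(msg) => Outcome2::Failed(msg),
        }
    }
}

fn my_test() -> Outcome2 {
    // `?` still works inside this inner helper because it returns a Result.
    fn body() -> Result<Outcome2, String> {
        let value: i32 = "42".parse().map_err(|e| format!("{e}"))?;
        if value != 42 {
            return Ok(Outcome2::Skipped { message: "unexpected input".into() });
        }
        Ok(Outcome2::Passed)
    }
    body().into()
}

fn main() {
    assert_eq!(my_test(), Outcome2::Passed);
}
```

This keeps `?` usable in practice at the cost of one extra conversion per test, which may or may not be acceptable ergonomics for the crate.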

Finally, regarding your test changes: I would probably not have modified assert_reordered_log to add this sloppy feature, but just hand-coded assertions in the single test that uses it. Not a blocker for this PR, though.

@LukasKalbertodt (Owner)

Closing due to inactivity. Once OP replies, we can reopen. (Or if anyone wants to pick this up, you are welcome to open a new PR).

Successfully merging this pull request may close these issues.

Runtime ignoring